Rice County
Uncertainty-preserving deep knowledge tracing with state-space models
Christie, S. Thomas, Cook, Carson, Rafferty, Anna N.
A central goal of both knowledge tracing and traditional assessment is to quantify student knowledge and skills at a given point in time. Deep knowledge tracing flexibly considers a student's response history but does not quantify epistemic uncertainty, while IRT and CDM compute measurement error but only consider responses to individual tests in isolation from a student's past responses. Elo and BKT could bridge this divide, but the simplicity of the underlying models limits information sharing across skills and imposes strong inductive biases. To overcome these limitations, we introduce Dynamic LENS, a modeling paradigm that combines the flexible uncertainty-preserving properties of variational autoencoders with the principled information integration of Bayesian state-space models. Dynamic LENS allows information from student responses to be collected across time, while treating responses from the same test as exchangeable observations generated by a shared latent state. It represents student knowledge as Gaussian distributions in high-dimensional space and combines estimates both within tests and across time using Bayesian updating. We show that Dynamic LENS has similar predictive performance to competing models, while preserving the epistemic uncertainty (the deep learning analogue to measurement error) that DKT models lack. This approach provides a conceptual bridge across an important divide between models designed for formative practice and summative assessment.
- North America > United States > Minnesota > Rice County > Northfield (0.04)
- North America > United States > Oregon > Washington County > Beaverton (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Education > Assessment & Standards (0.69)
- Education > Educational Setting > Online (0.47)
- Education > Educational Technology > Educational Software > Computer Based Training (0.46)
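The across-time combination described in the abstract can be illustrated with the standard precision-weighted fusion of two Gaussian estimates. This is a generic Bayesian-updating sketch, not the authors' implementation (Dynamic LENS applies the idea to high-dimensional Gaussians):

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates of the same latent
    state via precision-weighted (Bayesian) updating."""
    p1, p2 = 1.0 / var1, 1.0 / var2   # precisions
    var = 1.0 / (p1 + p2)             # posterior variance shrinks
    mu = var * (p1 * mu1 + p2 * mu2)  # precision-weighted mean
    return mu, var

# Two equally noisy estimates of a student's skill:
mu, var = fuse_gaussians(0.0, 1.0, 1.0, 1.0)
# mu == 0.5, var == 0.5: combining evidence reduces uncertainty
```

Note how the posterior variance is always smaller than either input variance, which is the sense in which epistemic uncertainty is "preserved" and updated rather than discarded.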
Voice Passing : a Non-Binary Voice Gender Prediction System for evaluating Transgender voice transition
Doukhan, David, Devauchelle, Simon, Girard-Monneron, Lucile, Ruz, Mía Chávez, Chaddouk, V., Wagner, Isabelle, Rilliard, Albert
This paper presents software that describes voices using a continuous Voice Femininity Percentage (VFP). The system is intended for transgender speakers during their voice transition and for the voice therapists supporting them in this process. A corpus of 41 French cis- and transgender speakers was recorded. In a perceptual evaluation, 57 participants estimated the VFP for each voice. Binary gender classification models were trained on external gender-balanced data and applied to overlapping windows to obtain average gender prediction estimates, which were calibrated to predict VFP and achieved higher accuracy than $F_0$- or vocal tract length-based models. Training-data speaking style and DNN architecture were shown to impact VFP estimation, and model accuracy was affected by speakers' age. This highlights the importance of style, age, and the conception of gender as binary or not in building adequate statistical representations of cultural concepts.
- Europe > France (0.05)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- North America > United States > Minnesota > Rice County > Northfield (0.04)
- (2 more...)
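The windowed-averaging step can be sketched as follows; `frame_probs` and the window size are illustrative stand-ins for the per-frame DNN gender probabilities and the window length used in the paper:

```python
def voice_femininity_pct(frame_probs, win=5):
    """Average per-frame binary 'female' probabilities over overlapping
    windows, then average the window scores into one percentage (0-100).
    Sketch of the pipeline shape only; the real system uses calibrated
    DNN outputs rather than raw probabilities."""
    if len(frame_probs) < win:
        win = len(frame_probs)
    scores = [sum(frame_probs[i:i + win]) / win
              for i in range(len(frame_probs) - win + 1)]
    return 100.0 * sum(scores) / len(scores)

vfp = voice_femininity_pct([1.0, 1.0, 0.0, 1.0, 1.0], win=2)
# One dissenting frame pulls the estimate below 100 rather than
# flipping a hard binary decision - the point of a continuous VFP.
```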
Wideband Audio Waveform Evaluation Networks: Efficient, Accurate Estimation of Speech Qualities
Catellier, Andrew, Voran, Stephen
Wideband Audio Waveform Evaluation Networks (WAWEnets) are convolutional neural networks that operate directly on wideband audio waveforms in order to produce evaluations of those waveforms. In the present work these evaluations give qualities of telecommunications speech (e.g., noisiness, intelligibility, overall speech quality). WAWEnets are no-reference networks because they do not require "reference" (original or undistorted) versions of the waveforms they evaluate. Our initial WAWEnet publication introduced four WAWEnets and each emulated the output of an established full-reference speech quality or intelligibility estimation algorithm. We have updated the WAWEnet architecture to be more efficient and effective. Here we present a single WAWEnet that closely tracks seven different quality and intelligibility values. We create a second network that additionally tracks four subjective speech quality dimensions. We offer a third network that focuses on just subjective quality scores and achieves very high levels of agreement. This work has leveraged 334 hours of speech in 13 languages, over two million full-reference target values and over 93,000 subjective mean opinion scores. We also interpret the operation of WAWEnets and identify the key to their operation using the language of signal processing: ReLUs strategically move spectral information from non-DC components into the DC component. The DC values of 96 output signals define a vector in a 96-D latent space and this vector is then mapped to a quality or intelligibility value for the input waveform.
- North America > United States > Colorado > Boulder County > Boulder (0.28)
- North America > United States > Indiana (0.04)
- North America > United States > Wyoming (0.04)
- (5 more...)
- Telecommunications (1.00)
- Government > Regional Government (0.45)
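The ReLU-to-DC mechanism the abstract names can be seen in a minimal numerical illustration (not the WAWEnet architecture itself): a zero-mean sinusoid carries no DC energy, but after rectification its mean becomes nonzero and tracks the signal level.

```python
import math

# 7 full periods of a unit sinusoid sampled at N points: zero mean.
N = 1000
x = [math.sin(2 * math.pi * 7 * n / N) for n in range(N)]
relu = [max(0.0, v) for v in x]  # ReLU = half-wave rectification

dc_before = sum(x) / N     # ~0: the input has no DC component
dc_after = sum(relu) / N   # ~1/pi: DC now encodes the signal's level
```

For a unit sinusoid the rectified mean approaches 1/pi (about 0.318), so spectral energy that was entirely at 7 cycles has been partly moved into the DC bin, which is the quantity WAWEnets read out of their 96 output signals.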
Exploring Variational Auto-Encoder Architectures, Configurations, and Datasets for Generative Music Explainable AI
Bryan-Kinns, Nick, Zhang, Bingyuan, Zhao, Songyan, Banar, Berker
Generative AI models for music and the arts in general are increasingly complex and hard to understand. The field of eXplainable AI (XAI) seeks to make complex and opaque AI models such as neural networks more understandable to people. One approach to making generative AI models more understandable is to impose a small number of semantically meaningful attributes on generative AI models. This paper contributes a systematic examination of the impact that different combinations of Variational Auto-Encoder models (MeasureVAE and AdversarialVAE), configurations of latent space in the AI model (from 4 to 256 latent dimensions), and training datasets (Irish folk, Turkish folk, Classical, and pop) have on music generation performance when 2 or 4 meaningful musical attributes are imposed on the generative model. To date there have been no systematic comparisons of such models at this level of combinatorial detail. Our findings show that MeasureVAE has better reconstruction performance than AdversarialVAE, which in turn has better musical attribute independence. Results demonstrate that MeasureVAE was able to generate music across music genres with interpretable musical dimensions of control, and performs best with low-complexity music such as pop and rock. We recommend that a 32 or 64 latent dimensional space is optimal for 4 regularised dimensions when using MeasureVAE to generate music across genres. Our results are the first detailed comparisons of configurations of state-of-the-art generative AI models for music and can be used to help select and configure AI models, musical features, and datasets for more understandable generation of music.
- North America > United States > New York > New York County > New York City (0.04)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (4 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
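One common way to impose a semantically meaningful attribute on a single latent dimension is an ordering-based regularizer that penalizes the latent dimension whenever it ranks items differently than the attribute does. The sketch below is illustrative only; the function and its inputs are hypothetical names, not code from the paper:

```python
def attribute_reg_loss(z_dim_vals, attr_vals):
    """Hinge-style regularizer encouraging one latent dimension to
    order items the same way a musical attribute (e.g. note density)
    does. Added to the usual VAE loss during training."""
    loss, n = 0.0, 0
    for i in range(len(attr_vals)):
        for j in range(len(attr_vals)):
            if attr_vals[i] > attr_vals[j]:
                # penalize whenever the latent ordering disagrees
                loss += max(0.0, z_dim_vals[j] - z_dim_vals[i])
                n += 1
    return loss / max(n, 1)

# Zero loss when the latent dimension already respects the attribute:
ok = attribute_reg_loss([0.0, 1.0, 2.0], [0, 1, 2])
bad = attribute_reg_loss([2.0, 1.0, 0.0], [0, 1, 2])
```

Driving such a loss to zero is what makes a latent dimension an interpretable "control knob" for the attribute, which is the property the paper measures as attribute independence and controllability.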
Biden to kick off rural America tour with $5B pledge in Democratic challenger Dean Phillips' Minnesota
Democratic Rep. Dean Phillips, who is mounting a primary challenge against the president, said he is 'disappointed' that Biden ally Rep. James Clyburn accused him of disrespecting Black voters. President Biden is kicking off his rural America tour Wednesday in Minnesota, the home state of Rep. Dean Phillips, who launched his 2024 Democratic presidential primary challenge just days ago. Biden is expected to announce $5 billion in new investments, including $1.7 billion in "climate-smart agriculture programs," $1 billion in broadband deployment, and some $2 billion in rural development programs. "I think there are obviously a lot of folks in Minnesota who understand and appreciate climate-smart agriculture and the enormous new income opportunities and environmental benefits that that accrues," U.S. Department of Agriculture Secretary Tom Vilsack told the Minneapolis Star Tribune. President Biden delivers remarks about government regulations on artificial intelligence systems during an event at the White House, Monday, Oct. 30, 2023.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.31)
- Asia > Middle East > Israel (0.26)
- North America > United States > New Hampshire > Hillsborough County > Manchester (0.06)
- (4 more...)
Network Embedding Using Sparse Approximations of Random Walks
In this paper, we propose an efficient numerical implementation of network embedding based on commute times, using a sparse approximation of a diffusion process on the network obtained by a modified version of the diffusion wavelet algorithm. The node embeddings are computed by optimizing the cross-entropy loss via stochastic gradient descent with sampling of low-dimensional representations of Green's functions. We demonstrate the efficacy of this method for data clustering and multi-label classification through several examples, and compare its performance with existing methods in terms of efficiency and accuracy. Theoretical issues justifying the scheme are also discussed.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Minnesota > Rice County > Northfield (0.04)
- North America > United States > Michigan > Ingham County > Lansing (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Gradient Descent (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (0.48)
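The quantity being preserved can be made concrete: the commute time between nodes i and j equals 2|E| times the effective resistance between them. The sketch below computes it exactly on a tiny graph via the grounded Laplacian (the paper's contribution is approximating this sparsely for large networks, which this sketch does not do):

```python
def effective_resistance(adj, i, j):
    """Effective resistance between nodes i and j: ground node j,
    inject unit current at i, and solve the reduced Laplacian system
    by Gaussian elimination. Commute time = 2 * |edges| * resistance."""
    n = len(adj)
    idx = [k for k in range(n) if k != j]
    # Reduced Laplacian (row/column j removed) and current injection at i
    A = [[(sum(adj[r]) if r == c else 0.0) - adj[r][c] for c in idx]
         for r in idx]
    b = [1.0 if r == i else 0.0 for r in idx]
    # Plain Gaussian elimination with partial pivoting
    for col in range(len(A)):
        piv = max(range(col, len(A)), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, len(A)):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, len(A)):
                A[r][c] -= f * A[col][c]
    x = [0.0] * len(A)
    for r in reversed(range(len(A))):
        x[r] = (b[r] - sum(A[r][c] * x[c]
                           for c in range(r + 1, len(A)))) / A[r][r]
    return x[idx.index(i)]

# Path graph 0-1-2 has 2 edges, so commute times are 4 (adjacent)
# and 8 (between the two ends).
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
commute_0_1 = 2 * 2 * effective_resistance(path, 0, 1)
commute_0_2 = 2 * 2 * effective_resistance(path, 0, 2)
```

Embeddings whose Euclidean distances approximate these commute times are what the cross-entropy objective in the paper is fit to.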
Explaining Math Word Problem Solvers
Automated math word problem solvers based on neural networks have successfully managed to obtain 70-80% accuracy in solving arithmetic word problems. However, it has been shown that these solvers may rely on superficial patterns to obtain their equations. In order to determine what information math word problem solvers use to generate solutions, we remove parts of the input and measure the model's performance on the perturbed dataset. Our results show that the model is not sensitive to the removal of many words from the input and can still manage to find a correct answer when given a nonsense question. This indicates that automatic solvers do not follow the semantic logic of math word problems, and may be overfitting to the presence of specific words.
- Asia > Thailand > Bangkok > Bangkok (0.06)
- North America > United States > Virginia (0.04)
- North America > United States > Colorado > El Paso County > Colorado Springs (0.04)
- (7 more...)
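The input-ablation procedure can be sketched generically; `ablation_probe` and its arguments are illustrative names, and any callable stands in for the trained solver:

```python
import random

def ablation_probe(solve, problem, answer, drop_frac=0.5,
                   trials=20, seed=0):
    """Remove a random fraction of input words and measure how often a
    solver still returns the reference answer. High scores under heavy
    ablation suggest the solver is keying on superficial patterns."""
    rng = random.Random(seed)
    words = problem.split()
    keep = max(1, int(len(words) * (1 - drop_frac)))
    hits = 0
    for _ in range(trials):
        sample = sorted(rng.sample(range(len(words)), keep))
        hits += solve([words[i] for i in sample]) == answer
    return hits / trials

# Degenerate case: a solver that ignores its input entirely keeps
# full "accuracy" no matter how much of the question is deleted.
score = ablation_probe(lambda ws: 8, "John has 3 apples and buys 5 more", 8)
```

A solver that genuinely reads the problem should see this score fall as `drop_frac` rises; the paper's finding is that real solvers degrade far less than semantic reasoning would predict.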
Training Latent Variable Models with Auto-encoding Variational Bayes: A Tutorial
Auto-encoding Variational Bayes (AEVB) is a powerful and general algorithm for fitting latent variable models (a promising direction for unsupervised learning), and is well-known for training the Variational Auto-Encoder (VAE). In this tutorial, we focus on motivating AEVB from the classic Expectation Maximization (EM) algorithm, as opposed to from deterministic auto-encoders. Though natural and somewhat self-evident, the connection between EM and AEVB is not emphasized in the recent deep learning literature, and we believe that emphasizing this connection can improve the community's understanding of AEVB. In particular, we find it especially helpful to view (1) optimizing the evidence lower bound (ELBO) with respect to inference parameters as an approximate E-step and (2) optimizing the ELBO with respect to generative parameters as an approximate M-step; doing both simultaneously, as in AEVB, is then simply tightening and pushing up the ELBO at the same time. We discuss how the approximate E-step can be interpreted as performing variational inference. Important concepts such as amortization and the reparametrization trick are discussed in great detail. Finally, we derive from scratch the AEVB training procedures of a non-deep and several deep latent variable models, including VAE, Conditional VAE, Gaussian Mixture VAE and Variational RNN. It is our hope that readers will recognize AEVB as a general algorithm that can be used to fit a wide range of latent variable models (not just the VAE), and apply AEVB to such models that arise in their own fields of research. PyTorch code for all included models is publicly available.
- Instructional Material > Course Syllabus & Notes (0.64)
- Research Report (0.64)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
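Two of the ingredients the tutorial covers have compact closed forms worth sketching: the reparametrization trick (which keeps sampling differentiable in the inference parameters during the approximate E-step) and the per-dimension Gaussian KL term of the ELBO. This is a generic sketch, not the tutorial's PyTorch code:

```python
import math
import random

def reparam_sample(mu, log_var, rng):
    # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients can flow through mu and log_var.
    return mu + math.exp(0.5 * log_var) * rng.gauss(0.0, 1.0)

def kl_to_std_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, 1)) per latent dimension; this is
    # the regularization term of the Gaussian-VAE ELBO.
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

rng = random.Random(0)
zs = [reparam_sample(2.0, math.log(0.25), rng) for _ in range(20000)]
mean_z = sum(zs) / len(zs)  # ~2.0: z really follows N(mu, sigma^2)
```

The KL term is zero exactly when q matches the standard-normal prior (mu = 0, log_var = 0), which is why minimizing it pulls the approximate posterior toward the prior while the reconstruction term pulls it toward the data.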
Let Us Now Enjoy the Incredibly Pure Tale of the Teacher Who Invented The Oregon Trail
Fifty years ago this winter, a young student teacher by the name of Don Rawitsch introduced his eighth grade American history class to a computer game on westward expansion that he had developed along with his colleagues Bill Heinemann and Paul Dillenberger. The game, called The Oregon Trail, would go on to sell over 65 million copies, many of them to educational institutions, making it one of the bestselling games of all time, right up there with Super Mario Bros. and Tetris. But when I talked to Rawitsch recently, he said that when he first came up with the idea, making money was the furthest thing from his mind. "Back in 1971, there was a lot of activity going on in the world of schools to upgrade curriculum and come up with innovative methods of teaching," Rawitsch said. Inspired by his teachers at Carleton College in Northfield, Minnesota, Rawitsch decided to pursue new types of pedagogy for his student teacher classes at Jordan Junior High School in Minneapolis.
- North America > United States > Oregon (0.70)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.25)
- Asia > Middle East > Jordan (0.25)
- North America > United States > Minnesota > Rice County > Northfield (0.25)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Education > Educational Setting > K-12 Education > Secondary School (0.55)
Recurrent Off-policy Baselines for Memory-based Continuous Control
When the environment is partially observable (PO), a deep reinforcement learning (RL) agent must learn a suitable temporal representation of the entire history in addition to a control strategy. This problem is not novel, and both model-free and model-based algorithms have been proposed for it. However, inspired by recent successes in model-free image-based RL, we noticed the absence of a model-free baseline for history-based RL that (1) uses the full history and (2) incorporates recent advances in off-policy continuous control. We therefore implement recurrent versions of DDPG, TD3, and SAC (RDPG, RTD3, and RSAC), evaluate them on short-term and long-term PO domains, and investigate key design choices. Our experiments show that RDPG and RTD3 can surprisingly fail on some domains and that RSAC is the most reliable, reaching near-optimal performance on nearly all domains. However, one task that requires systematic exploration still proved difficult, even for RSAC. These results show that model-free RL can learn good temporal representations using only reward signals; the primary difficulties appear to be computational cost and exploration.
- North America > United States > Minnesota > Rice County > Northfield (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.48)
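The "temporal representation of the entire history" can be sketched with a one-unit recurrent encoder that folds a full observation sequence into a single hidden state, which a recurrent actor-critic (RDPG/RTD3/RSAC-style) would then condition on. The weights here are illustrative constants, not learned parameters:

```python
import math

def rnn_encode(history, w_in=0.5, w_rec=0.9):
    """Minimal recurrent encoder: fold a full observation history into
    one hidden state. In the recurrent agents discussed above, this
    role is played by a trained RNN feeding the actor and critic."""
    h = 0.0
    for obs in history:
        h = math.tanh(w_in * obs + w_rec * h)
    return h

# Two histories with identical recent observations but different pasts
# yield different states, so the agent can disambiguate a partially
# observable environment that a feedforward policy could not.
h1 = rnn_encode([1.0, 0.0, 0.0])
h2 = rnn_encode([-1.0, 0.0, 0.0])
```

This is exactly the capability a memoryless policy lacks: given only the last observation (0.0 in both cases), it would be forced to act identically in the two situations.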